Reference-less MR Thermometry Using Iteratively-Reweighted L1 Regression
Abstract
Introduction

Proton resonance frequency (PRF) shift MR thermometry is a promising tool for monitoring thermal therapies. In PRF-shift thermometry, maps of relative temperature change are estimated by subtracting the image phase in a pretreatment (baseline) state from the image phase in a heated state. The baseline phase can be obtained from a pretreatment image; however, this approach is sensitive to motion and reduces SNR by a factor of √2. Reference-less thermometry methods [1,2] avoid these issues by estimating temperature from a single image via least-squares (L2) polynomial regression and extrapolation. To avoid temperature misestimation, current reference-less methods require that the hot spot be masked out of the polynomial regression. This complicates their application, since the user must either know the location of the hot spot a priori or employ a sophisticated tracking algorithm to follow it [3]. We propose a new reference-less thermometry method that uses robust regression, so the hot spot need not be masked out. The method therefore requires no human interaction or tracking to obtain accurate temperature maps, and it is inherently robust to motion.

Theory

Reference-less thermometry methods assume that, in the absence of therapy-induced temperature changes, image phase varies smoothly over space and can be accurately represented as a superposition of low-order polynomial basis functions, the coefficients of which are estimated via regression. In this context, the phases at spatial locations within the hot spot are regarded as outliers whose influence on the estimates is to be avoided. L1 regression is a natural choice for this problem, since it is inherently robust to outliers. In our application, the L1-optimal polynomial coefficients are given by

\hat{c} = \arg\min_{c} \lVert \theta - Ac \rVert_1,

where \theta is the vector of measured image phases and the columns of A contain the polynomial basis functions evaluated at each voxel.
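The title indicates that this L1 problem is solved with iterative reweighting. Below is a minimal sketch of iteratively reweighted least squares (IRLS) applied to L1 polynomial regression, using a 1-D baseline and synthetic data for illustration; the function name irls_l1_polyfit, its parameters, and the simulated hot spot are assumptions of this sketch, not taken from the paper (the actual method fits low-order 2-D spatial polynomials to image phase).

```python
import numpy as np

def irls_l1_polyfit(x, phase, degree=2, n_iter=30, eps=1e-6):
    """Fit polynomial coefficients c minimizing ||phase - A @ c||_1 via
    iteratively reweighted least squares (IRLS)."""
    A = np.vander(x, degree + 1)   # polynomial design matrix
    sw = np.ones_like(phase)       # sqrt-weights; first pass is a plain L2 fit
    for _ in range(n_iter):
        # Weighted least squares: minimize sum_i w_i * r_i**2 with
        # w_i = 1/|r_i|, which approximates the L1 objective sum_i |r_i|.
        c, *_ = np.linalg.lstsq(A * sw[:, None], sw * phase, rcond=None)
        r = phase - A @ c          # residuals of the current fit
        sw = 1.0 / np.sqrt(np.maximum(np.abs(r), eps))
    return c

# Synthetic 1-D example: smooth background phase plus a localized hot spot.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 256)
background = 0.5 + 0.8 * x - 0.3 * x**2
hotspot = 2.0 * np.exp(-((x - 0.2) / 0.05) ** 2)
phase = background + hotspot + 0.01 * rng.standard_normal(x.size)

c = irls_l1_polyfit(x, phase, degree=2)
delta_phase = phase - np.vander(x, 3) @ c   # temperature-induced phase change
```

Because each pass downweights large residuals by 1/|r_i|, hot-spot voxels lose influence over the baseline fit instead of biasing it, which is why no mask is needed.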
Similar Articles
An Iteratively Reweighted Least Square Implementation for Face Recognition
We propose, as an alternative to current face recognition paradigms, an algorithm using reweighted l2 minimization, whose recognition rates are not only comparable to those of the random-projection l1-minimization compressive sensing method of Yang et al. [5], but also robust to occlusion. In numerical experiments, the reweighted l2 solution mirrors the l1 solution [1] even with occlusion. Moreover, we pre...
Multiplicative Updates for L1-Regularized Linear and Logistic Regression
Multiplicative update rules have proven useful in many areas of machine learning. Simple to implement, guaranteed to converge, they account in part for the widespread popularity of algorithms such as nonnegative matrix factorization and Expectation-Maximization. In this paper, we show how to derive multiplicative updates for problems in L1-regularized linear and logistic regression. For L1-regu...
A comparison of typical ℓp minimization algorithms
Recently, compressed sensing has been widely applied to various areas such as signal processing, machine learning, and pattern recognition. To find the sparse representation of a vector w.r.t. a dictionary, an l1 minimization problem, which is convex, is usually solved in order to overcome the computational difficulty. However, to guarantee that the l1 minimizer is close to the sparsest solutio...
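For concreteness, the convex relaxation the snippet refers to is basis pursuit: finding the sparsest code x for a vector w over a dictionary D is combinatorial, so the l0 objective is replaced by its l1 relaxation (notation assumed here for illustration):

\min_x \lVert x \rVert_0 \ \text{s.t.}\ Dx = w
\quad\longrightarrow\quad
\min_x \lVert x \rVert_1 \ \text{s.t.}\ Dx = w

The relaxed problem is convex (it can be posed as a linear program), and under conditions such as the restricted isometry property its minimizer recovers the sparsest solution.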
Online and Batch Supervised Background Estimation via L1 Regression
We propose a surprisingly simple model for supervised video background estimation. Our model is based on ℓ1 regression. As existing methods for ℓ1 regression do not scale to high-resolution videos, we propose several simple and scalable methods for solving the problem, including iteratively reweighted least squares, a homotopy method, and stochastic gradient descent. We show through extensive e...
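A quick way to see why ℓ1 regression suits background estimation: for the simplest model, a constant background value per pixel, the ℓ1-optimal fit over frames is exactly the temporal median, which ignores transient foreground. A minimal sketch of this degenerate special case (the synthetic array shapes and detection threshold are assumptions, not from the paper):

```python
import numpy as np

# L1 regression with a constant-per-pixel background model: the minimizer
# of sum_t |I_t(p) - B(p)| over B(p) is the temporal median at pixel p.
rng = np.random.default_rng(0)
frames = rng.random((100, 48, 64))               # synthetic video: (time, H, W)
background = np.median(frames, axis=0)           # closed-form L1 fit
foreground = np.abs(frames - background) > 0.25  # crude foreground mask
```

The IRLS, homotopy, and stochastic gradient methods the snippet lists address the general ℓ1 regression problem, which has no such closed form.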
Outlier Detection Using Nonconvex Penalized Regression
This paper studies the outlier detection problem from the point of view of penalized regressions. Our regression model adds one mean shift parameter for each of the n data points. We then apply a regularization favoring a sparse vector of mean shift parameters. The usual L1 penalty yields a convex criterion, but we find that it fails to deliver a robust estimator. The L1 penalty corresponds to ...
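In symbols (notation assumed here for illustration), the mean-shift model augments a linear regression with one shift parameter \gamma_i per data point and penalizes \gamma toward sparsity:

\min_{\beta,\,\gamma}\ \tfrac{1}{2} \lVert y - X\beta - \gamma \rVert_2^2 + \sum_{i=1}^{n} p_\lambda(\lvert \gamma_i \rvert)

A nonzero estimate of \gamma_i flags point i as an outlier. With p_\lambda(t) = \lambda t (the L1 penalty) the criterion is convex but, as the snippet notes, not robust, which motivates the nonconvex penalties of the title.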